Essay – Society of Mind, final paper

Greg Detre

Sunday, May 18, 2003

 

Introduction

Steve's intro

[…]

The way in which the C&C framework was originally proposed implied a generality and a lack of anthropomorphism that require further argument. We are going to start by attempting a species-independent argument for intelligence as compression by decomposition. We then bring in the M6 framework to show how a mind that doesn't decompose the world into symbols must be limited to some variation of stimulus-response.

We take the argument further, exploring how the C&C and M6 frameworks relate to each other, and the way that different situations are represented as different kinds of things, differences, causes and clauses at the various levels of the mind. This examination highlights both the advantages and the flaws of the C&C framework.

Finally, we propose some amendments and extensions to the C&C framework.

What can we say with confidence?

Intelligence as compression by decomposition

We feel that we can argue with some confidence that any intelligence, no matter how alien, must compress its model of the world by decomposing it into symbols.

Compression is important because of the lookup-table problem: without compression, the number of situation-response pairs that must be stored explodes combinatorially.

Decomposition into symbols is important because the world is structured and regular: the same symbols keep cropping up, and indeed predict the world well. Admittedly, the symbols we use have probably been shaped somewhat by language and the need to communicate.
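To make the lookup-table problem concrete, here is a minimal sketch (our own illustration, with made-up numbers): a table storing a response for every joint situation grows exponentially in the number of features, whereas a decomposed, per-feature representation grows only linearly.

    def lookup_table_size(n_features, n_values):
        """Entries needed to store a response for every joint situation."""
        return n_values ** n_features

    def decomposed_size(n_features, n_values):
        """Entries needed if each feature contributes an independent rule."""
        return n_features * n_values

    for n in (5, 10, 20):
        print(n, lookup_table_size(n, 10), decomposed_size(n, 10))
    # 5 features:  100,000 entries vs 50
    # 10 features: 10^10 entries vs 100
    # 20 features: 10^20 entries vs 200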

It's worth saying that there is, of course, considerably more to intelligent behaviour than simply decomposing the world in a useful, predictive way. For a start, you want to decompose it in many different ways, according to need and to give flexibility; secondly, you have to be able to recompose the symbols in different ways.

Is there anything more to intelligence than decomposing and recomposing symbols? Not according to the classical, symbolic view of AI, at least.

---

At the risk of adding to the canon of extant just-so stories, I am going to try to sketch an argument, resting on a minimum of assumptions, for why the TDC framework should arise in any thinking thing.

Where there's life (a process that maintains itself, transduces and reproduces), there's scarcity.

Where there's scarcity, there's death.

Where there's death, there's selection (assuming that there is a source of variation).

Where there's selection, there's pressure to exploit the world increasingly efficiently.

(Doesn't there also have to be inheritance?)

Where there's pressure to exploit the world increasingly efficiently, there will be an arms race, and this will lead to increasingly close-fitting and complex models of the world.

More complex models take longer to learn and more memory to store. If you can represent them more compactly and still generate all the data you need, then you can have a more complex model without the memory and learning-time costs.

Compression can be expressed in terms of the minimum description length (MDL) principle: the more structure there is in the world, the more you benefit from really good, abstract, compact, generative models, because you can greatly reduce the number of uninformative, noisy exceptions you need to store in order to capture the data.
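As a toy illustration of the MDL point (our own figures, not drawn from the C&C framework itself), compare the two-part code length L(model) + L(data | model) against storing the data raw:

    import math

    n = 10_000          # binary observations
    raw_bits = n        # 1 bit each, no model

    model_bits = 64     # assumed cost of encoding a simple generative rule
    p = 0.01            # the rule mispredicts 1% of observations
    # the exceptions cost n times the binary entropy of the error rate
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    compressed_bits = model_bits + n * h

    print(raw_bits, round(compressed_bits))   # 10000 vs roughly 872

The more structure the rule captures (the lower its error rate), the shorter the second part of the code, which is exactly the sense in which good abstract models pay for themselves.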

In other words, having good models helps you benefit, which means that you're more likely to survive and propagate your genes, which means that those genes are more likely to proliferate.

Could we think of genes in non-genetic terms, as just whatever gets passed to the next generation? Could memes count as inherited?

More compact representations allow you to know more about the world. The claim being made is that the most compact representations are things, differences and causes, but why?

The most useful kind of knowledge you can have is knowledge of what to do in order to maximise reward.

In order to make use of that, you need to have a sense of choice, or at least of alternatives.

I think we can see the first three levels of model six as an obvious track for the intelligence arms race described above.

Once you've got as far as level 3, you've got the ability to choose between alternatives.

So then you can make use of knowledge to direct your actions... hmm, this line isn't going anywhere.

So, to backtrack: the most useful kind of knowledge you can have is knowledge of what to do in order to maximise reward.

There are consequences you like, which presupposes some sort of reward function. Are these preferred consequences goals?

Anyway, causes are those things (or differences?) that are most richly generative of the consequences you like.

As soon as you have a means of representing hypotheticals, you can compare present, past and possible worlds.

Or, to put it another way, as soon as you have a means of comparing... then what? This doesn't work; it needs to be put the other way round, as above.

As soon as you have a basic, first-order-intentionality predictive model of others, it will get more complex.

Then, as before, you will try to break it into components, because components are better for storage (and we know that minds are structured, hence compressible and decomposable).

And what's better for storage is better for generation.

Once you start to represent other people as decomposable, what will you find to be the best way of decomposing them? Beliefs and desires? Their own things, differences and causes? Goals? Consequences? Or things, differences and causes about the way they work?

The thing is, I've justified TDC at some levels, but by no means at all of them, and the argument will become increasingly tenuous if I try. The question is: why TDC at every level? You might be forgiven for thinking that it would work well for certain levels but not for others.

Does level 3 (deliberative) have TDC, and of what kind? (See Steve's section.) It definitely has differences, and I suppose each complete hypothetical scenario is a thing. There's a single cause, namely the best scenario, which causes your behaviour, but I don't think that there are component notions of cause within the different scenarios.

Aunt Bertha

There is a knockdown argument, used by Ned Block to attack the validity of the Turing test as an infallible indicator of intelligence, that we initially worried might be applicable here. It goes something like this: we might imagine an alien species with a very, very large head that does no learning whatsoever, but is born with a huge list of rules for what to do in any of the situations it will ever encounter. We can see this as a huge, instinctive-level-only brain. Within the terms of the argument, such an alien would be intelligent, at least by any behavioural test we can devise, but would have no need to decompose the world into C&C. This worried us briefly, but it needn't, for two reasons.

Firstly, the lookup table needed to pass even a five-minute, imaginatively administered Turing test reliably would be altogether impossible to fit even in a brain the size of a planet. Block's argument is intended to be an a priori one, and is therefore not relevant to us, given that we're only concerned with what is physically possible. No intelligent alien with just a massive lookup table for a brain could actually exist.
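A back-of-the-envelope calculation (our own illustrative figures, not Block's) shows the scale of the problem:

    import math

    # Assume a five-minute typed conversation of ~1,000 characters drawn
    # from a 30-symbol alphabet. A lookup table must store a response for
    # every possible conversational history.
    alphabet, chars = 30, 1_000
    print(round(chars * math.log10(alphabet)))   # ~1477

    # i.e. ~10^1477 possible histories, against only ~10^80 atoms in the
    # observable universe: no physically possible brain can hold the table.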

Secondly, such an alien would never learn. If its environment were to change in even the slightest way from that for which it had evolved, its rules wouldn't work. What if we were to try to add a second level, a learning level, which simply associated stimuli with responses, remembering them all and applying the ones that worked best? It would be evident from the trial-and-error nature of such a being's behaviour that it lacked true intelligence. No two situations will ever be identical, so it would need rules for judging similarity, and such rules are decompositions, focusing upon some features at the expense of others.

If we assume that the environment changes very slowly and doesn't punish caution or failure, might a very long-lived alien of this kind manage by slowly learning and exploring the space of possible situations?
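Here is a minimal sketch (entirely hypothetical) of such a second-level, stimulus-response alien. The give-away is the similarity judgment itself: the distance metric treats each situation as a vector of features and implicitly weights them, which is already a decomposition.

    from math import dist

    memory = []   # (feature_vector, response) pairs

    def remember(features, response):
        memory.append((features, response))

    def act(features, default):
        """Replay the response from the most similar past situation."""
        if not memory:
            return default
        _, response = min(memory, key=lambda m: dist(m[0], features))
        return response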

Could we not add a third level? It becomes more difficult to imagine what such a level would look like if we refuse to allow decomposition. One way of seeing the deliberative level is as a predictive, feedforward model. Such a model could be trained using supervised learning over the course of the alien's lifetime, so that when faced with a new situation, it would try to generate the implications of a given action, in order to decide in advance whether to execute it. The question is whether it makes sense to talk of any sort of feedforward model that doesn't decompose the situation into features of some description. The fact that we term some features "sub-symbolic" seems to make them somehow distinguishable from real, symbolic features. But this is a mistake: all "sub-symbolic" means is that the features being detected are at a lower level, or divide up the space in a different way, from the features that we happen to be used to. Any sort of prediction requires figuring out what is similar about the present case and past cases. Unless they are identical, this means paying closer attention to some features at the expense of others. This is decomposition.
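A minimal sketch (again hypothetical) of what such a third level might look like. However predict() is implemented, it has to map situations and actions onto predicted outcomes via features of some description:

    def deliberate(situation, candidate_actions, predict, score):
        """Choose the action whose predicted consequence scores best.

        predict(situation, action) -> predicted next situation (the
        learned feedforward model); score(situation) -> how good that
        outcome would be.
        """
        return max(candidate_actions,
                   key=lambda a: score(predict(situation, a)))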

There were two reasons for setting up this straw-man argument:

Firstly, to show how powerful the idea of mental levels is: you can't do anything intelligent with just the first three levels.

Secondly, to show that intelligence is decomposition: the straw-man argument for the possibility of a lookup-table alien shows, from the opposite angle, that you could not have intelligence without decomposition, having first made the positive argument for how decomposition necessarily comes about through evolution.

Is there any way to differentiate between the kinds of sub-symbolic features that neural networks detect and the kinds of features we're talking about? Indeed, does it even make sense to talk of the "sub-symbolic"? After all, that just means the features are at a lower level than the ones we're used to seeing, not that they aren't in fact symbols in their own right.

So, do they have to be special or abstract features?

How do you represent actions without decomposition? As a full-body pose-graph, I suppose?

But that's not the real question; the question is how you choose the shortlist of actions to run through the feedforward model.

Can you imagine the fourth level without decomposition?

Could you imagine intelligence without the fourth and fifth levels and above?

This would need a static environment: the whole point is that if the environment changes, the rules don't work, and so the alien doesn't learn, and learning is the whole point of intelligence.

Assumptions required

scarcity – proved

evolution:

natural selection – occurs whenever there's scarcity

variation – assumed

inheritance – this is more problematic: either you can have creatures that don't die, or you need some sort of inheritance mechanism. Can we even assume reproduction?

composition as following from an intelligence arms race – proved

things/differences as the best way to decompose? – not proved

need for consequences/goals in order to explain the need for causes?

causes as meta-analogies? – take two pairs of situations (clauses) and see the difference between them…

Steve's section

 

 

Levels of model six against Thing, Difference, Cause and Clause:

1. Instinctive (e.g. pulls hand away after touching a hot stove)
   Thing: low-level sensory perception
   Difference: a change in that perception (i.e. the addition of heat/pain)
   Cause: roughly calculated by the body
   Clause: the episodic memory of the reflex reaction

2. Learned
   Thing: nouns
   Difference: verbs
   Cause: the agent responsible for carrying out the action
   Clause: the episodic memory of combining Things, Differences and Causes

3. Deliberative
   Thing: transitions
   Difference: differences between transitions
   Cause: a further transition explaining the Difference
   Clause: e.g. "When I bake a cake today, it is okay to touch the handle of the stove, but not the stove, because it is hot"

4. Reflective
   Thing: deliberations (Clauses from the Deliberative level)
   Difference: differences between deliberations
   Cause: the rationale for coming up with those plans
   Clause: the record of that process

5. Self-reflective
   Thing: reflections
   Difference: comparisons between reflections
   Cause: rationalizations for those Differences
   Clause: e.g. "How could I have failed to realize that the handle was going to burn me? I must have been distracted."

6. Self-conscious
   Thing: self-reflections
   Difference: comparisons between the self-reflections that different people might place upon your actions
   Cause: (Causes become less necessary at this level; see below)
   Clause: e.g. "If anyone else had seen me burn myself on the stove handle, they would have laughed at my ignorance."


We believe that a more profitable way to think about these two ideas in combination is as a 6x4 grid, which allows all parts of the Causes and Clauses framework to exist within every level of model six.

Let us imagine how the Causes and Clauses framework fits into each of the levels of model six. Let us begin with the Instinctive level and take the example of a person who, by reflex, pulls their hand away after touching a hot stove. We can identify the basic Thing to be low-level sensory perception, and the Difference to be a change in that perception. In this case, the perceptions are those of the regular hand and the burning hand, and the Difference is between those two perceptions. Here, the cause is roughly calculated by the body, as the hand is pulled away in the proper direction, opposite from the source of pain. While there may not be a detailed concept of what that cause was, the fact that the hand moves correctly to avoid the pain (rather than, say, towards or along the stove) suggests that the cause is known in some form. The Clause in this domain can be thought of as the episodic memory of that reflex reaction that is saved in the nervous system.

Up on the level of Learned reactions, we find that a different grouping of ideas under the Causes and Clauses framework can be applied. Here, we can see Things as nouns and Differences as verbs. Causes can be thought of as the agent responsible for carrying out the action described by the verb. Clauses are the episodic memory of combining Things, Differences and Causes. This is probably the most natural level for thinking about the Causes and Clauses framework. A clause like "don't touch the stove because it is hot" identifies the stove as a Thing, the difference between touching and not touching the stove as a Difference, and excessive heat as a Cause.

On the level of Deliberative thinking, it makes more sense to think about Things as transitions, and Differences as differences between transitions. As before, Causes are the explanation for there being a Difference between Things. We might consider the following Clause, which illustrates these roles: "When I bake a cake today, it is okay to touch the handle of the stove, but not the stove, because it is hot". In this case, the transitions include baking a cake (from not baked to baked), touching the handle (from not touched to touched), and touching the stove (from not touched to touched). As these transitions are compared, their Differences become obvious (touching the handle versus touching the stove). Here, the Cause implies a further transition which is not stated in the clause: that your hand will go from not painful to painful if you touch the stove. This transition serves as an explanation for the difference between touching the stove and touching the handle.
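One way of making the Things-as-transitions idea concrete (our own hypothetical encoding, not taken from the framework itself): Differences fall out of comparing transitions field by field, and the Cause is a further, unstated transition.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Transition:
        thing: str    # what the transition is about
        before: str
        after: str

    touch_handle = Transition("handle", "not touched", "touched")
    touch_stove = Transition("stove", "not touched", "touched")

    def difference(a, b):
        """Fields on which two transitions differ."""
        return {f: (getattr(a, f), getattr(b, f))
                for f in ("thing", "before", "after")
                if getattr(a, f) != getattr(b, f)}

    print(difference(touch_handle, touch_stove))  # {'thing': ('handle', 'stove')}

    # the Cause: an unstated transition explaining the difference
    burn = Transition("hand", "not painful", "painful")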

As we move above the level of Deliberative thinking, we can begin to see a pattern whereby the Things one level up are the Clauses from the level below. For Reflective thinking, a Thing can be thought of as a Clause from the Deliberative level, or a "deliberation". Differences are formed between these deliberations. Causes are the rationale you have for coming up with those plans, and Clauses are the records of that process. Here is an example of a Clause in Reflective thinking using this framework: "When I planned to touch the handle of the stove rather than touch the stove itself, I still burned myself. I didn't consider that the handle would be too hot, because it looked insulated". The deliberations are the plan to touch only the handle of the stove, as well as the consideration of whether the handle would be too hot. The Cause in this case is the rationale that the handle looked insulated.

Self-reflective thinking follows the same pattern begun in Reflective thinking. Its Things are "reflections", its Differences are comparisons between those reflections, and its Causes are the rationalizations for those Differences. An example Clause for this level would be: "How could I have failed to realize that the handle was going to burn me before touching it? I must have been distracted."

Finally, Self-conscious emotions form the crowning layer in this recursive process. Here, Things are "self-reflections", and Differences are found by comparing the "self-reflections" that different people you know might place upon your actions. An example Clause might be: "If anyone else had seen me burn myself on the stove handle, they would have laughed at my ignorance."

Now that we have outlined how these ideas might be more profitably connected, let us examine a few trends that this treatment appears to reveal. First of all, one can see the chaining process suggested by the Causes and Clauses framework operating as one goes up the levels. Because higher levels use the structures created on lower levels, an important feature of the framework is conserved in this combination. Secondly, we notice that Clauses become more complicated sentences as one goes up. Lastly, we notice that this treatment suggests that Causes become less necessary as we move upwards through the levels.

 

Extensions to C&C

Reinforcement

We have tried to show how the C&C framework might fit within each of the levels of model six.

We are going to argue that there is something missing from the C&C framework, even though it shapes the way the framework has been described all the same. The C&C framework needs to take into account what could variously be termed "reward", "value" or "salience". We will argue that this explains the strange-seeming omission of goals from the C&C framework, and that the idea of a goal drops out of the framework quite naturally once we have a notion of reward.

Reward (and punishment) is ultimately grounded in evolutionary adaptiveness. But reward needs to be decomposed too, since we don't know what will prove evolutionarily adaptive in every situation. The only way we can guess is by associating particular past situations with the reward received. Unfortunately, this is an enormously difficult problem, both because rewards are not immediate and because we need to know what it was about those situations that elicited the reward.
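The standard machinery for this credit-assignment problem (our illustration, borrowed from reinforcement learning rather than from the C&C framework) is temporal-difference learning with eligibility traces, which spreads a delayed reward back over the features of recently visited situations. Note that it presupposes that situations have already been decomposed into features:

    def td_update(weights, traces, features, reward, value, prev_value,
                  alpha=0.1, gamma=0.9, lam=0.8):
        """One TD(lambda) step; features is the set of active features."""
        delta = reward + gamma * value - prev_value   # surprise signal
        for f in features:                            # mark current features
            traces[f] = traces.get(f, 0.0) + 1.0
        for f, e in traces.items():                   # credit recent features
            weights[f] = weights.get(f, 0.0) + alpha * delta * e
            traces[f] = gamma * lam * e               # traces decay over time
        return weights, traces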

 

 

Add description/adjectives to C&C?

Adjectives as differences between actualThings and prototypeThings, i.e. between what you're seeing now and your abstract conception of them?

argue that TDC (things, differences, causes) are in some sense operations

argue that the notion of clause is underdetermined, nebulous and redundant

Need for consequences/goals? Show that causes and consequences are complements, both being ways of picking salient (i.e. rewarding) aspects of the situation out to focus on: causes are the valuable/salient aspects before the action, and consequences the valuable/salient aspects that happen after the action.

Are goals and consequences equivalent?

C&C3 just-so story